Setting Up a Kubernetes Cluster Environment

I. Background

There are currently two main ways to deploy a Kubernetes cluster in production:

kubeadm

kubeadm is a Kubernetes deployment tool. It provides kubeadm init and kubeadm join for standing up a Kubernetes cluster quickly.

Official documentation: https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/

Binary packages

Download the release binaries from GitHub and deploy each component by hand to assemble a Kubernetes cluster.

kubeadm lowers the barrier to entry, but it hides a lot of detail, which makes problems harder to troubleshoot. If you want more control, deploying from binary packages is recommended: it is more work, but you learn how the pieces fit together along the way, which also helps with maintenance later on.

II. The kubeadm deployment approach

kubeadm is the tool published by the Kubernetes community for quickly deploying a cluster. Two commands are enough to bring one up:

  • Create a master node: kubeadm init
  • Join a node to the cluster: kubeadm join <master IP and port>

III. Installation requirements

Before starting, the machines that will run the cluster need to meet the following requirements:

  • One or more machines running CentOS 7.x x86_64
  • Hardware: 2 GB RAM or more, 2 CPUs or more, 30 GB of disk or more
  • Network connectivity between all machines in the cluster
  • Internet access, needed to pull images
  • Swap disabled

IV. Goals

  • Install Docker and kubeadm on every node
  • Deploy the Kubernetes master
  • Deploy a container network plugin
  • Deploy the Kubernetes nodes and join them to the cluster
  • Deploy the Dashboard web UI to view Kubernetes resources visually

V. Environment

Role       IP address     Components
master01   192.168.5.3    docker, kubectl, kubeadm, kubelet
node01     192.168.5.4    docker, kubectl, kubeadm, kubelet
node02     192.168.5.5    docker, kubectl, kubeadm, kubelet

VI. Environment initialization

1. Check the OS version

This installation method requires CentOS 7.5 or later.

[root@master ~]# cat /etc/redhat-release
CentOS Linux release 7.5.1804 (Core)

2. Hostname resolution

To make it easy for the cluster nodes to reach each other by name, configure hostname resolution here. In production an internal DNS server is recommended instead.

Edit /etc/hosts on all three servers and add the following entries:

192.168.5.3 master
192.168.5.4 node1
192.168.5.5 node2
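
A minimal check that the entries work, run from any of the three machines (the hostnames are the ones configured above):

[root@master ~]# ping -c 1 node1
[root@master ~]# ping -c 1 node2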

3. Time synchronization

Kubernetes requires the clocks on all cluster nodes to be closely synchronized. Here the chronyd service is used to sync time from the network.

Start the chronyd service:

[root@master ~]# systemctl start chronyd
[root@master ~]# systemctl enable chronyd
[root@master ~]# date

With internet access, ntpdate can also be used to sync against ntp1.aliyun.com:

[root@localhost ~]# yum install ntpdate -y
[root@localhost ~]# ntpdate ntp1.aliyun.com
17 Jun 14:34:03 ntpdate[5782]: step time server 120.25.115.20 offset 243455.286640 sec
[root@localhost ~]# date
2022年 06月 17日 星期五 14:34:07 CST

In production, an internal time server is recommended.

4. Disable the iptables and firewalld services

Kubernetes and Docker generate a large number of iptables rules at runtime. To keep the system's own rules from interfering with them, disable both services outright.

1. Stop and disable firewalld:

[root@master ~]# systemctl stop firewalld
[root@master ~]# systemctl disable firewalld

2. Stop and disable iptables:

[root@master ~]# systemctl stop iptables
[root@master ~]# systemctl disable iptables

5. Disable SELinux

SELinux is a Linux security service. If it is left enabled, all sorts of odd problems can show up while installing the cluster.

Edit /etc/selinux/config:

[root@master ~]# vim /etc/selinux/config

Set SELINUX to disabled:

SELINUX=disabled
  • Note: a reboot is required for this change to take effect.
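
If a reboot is not convenient right away, SELinux can also be put into permissive mode for the current boot; this is only a stop-gap and does not replace the config change above:

[root@master ~]# setenforce 0     # effective immediately, lost on reboot
[root@master ~]# getenforce       # should report Permissive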

6. Disable the swap partition

Swap is the virtual memory partition: once physical memory is exhausted, disk space is used as memory instead. Having swap enabled can hurt system performance badly, so Kubernetes requires swap to be disabled on every node. If swap really cannot be turned off for some reason, that has to be declared explicitly with extra parameters during cluster installation.

Edit the mount configuration /etc/fstab and comment out the swap line:

[root@master ~]# vim /etc/fstab

Comment out /dev/mapper/centos-swap swap:

# /dev/mapper/centos-swap swap
  • Note: a reboot is required for this change to take effect.
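
The fstab change only takes effect after a reboot; to turn swap off in the running system right away and verify it, something like:

[root@master ~]# swapoff -a    # disable all swap devices now
[root@master ~]# free -h       # the Swap line should show 0B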

7. Adjust kernel parameters

Adjust the kernel parameters to enable bridge filtering and IP forwarding.
Edit /etc/sysctl.d/kubernetes.conf:

[root@master ~]# vim /etc/sysctl.d/kubernetes.conf

Add the following settings:

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1

Reload the configuration:

[root@master ~]# sysctl -p /etc/sysctl.d/kubernetes.conf

Load the bridge netfilter module:

[root@master ~]# modprobe br_netfilter

Check that the module loaded successfully:

[root@master ~]# lsmod | grep br_netfilter
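
As a quick sanity check that both the module and the sysctl values are in effect, the settings can be read back directly:

[root@master ~]# sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward    # both should print 1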

8. Enable IPVS

Kubernetes Services have two proxy modes, one based on iptables and one based on IPVS. Of the two, IPVS performs noticeably better, but to use it the IPVS kernel modules have to be loaded manually.

1. Install ipset and ipvsadm:

[root@master ~]# yum install ipset ipvsadmin -y

If you get "No package ipvsadmin available", use instead:

[root@master ~]# yum install ipvsadm

2. Write the modules to load into a script file:

[root@master ~]# cat <<EOF> /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF

3. Make the script executable:

[root@master ~]# chmod +x /etc/sysconfig/modules/ipvs.modules

4. Run the script:

[root@master ~]# /bin/bash /etc/sysconfig/modules/ipvs.modules

5. Check that the modules loaded successfully:

[root@master ~]# lsmod | grep -e ip_vs -e nf_conntrack_ipv4

Installation issue

If you see:

没有可用软件包 ipvsadmin。 (No package ipvsadmin available.)

use instead:

[root@master ~]# yum install ipvsadm

Another possibility (not guaranteed to apply): this message means the configured yum repositories no longer carry the package, in which case the EPEL repository may be needed.

EPEL stands for Extra Packages for Enterprise Linux, a third-party repository of additional packages; enable it with:

yum install -y epel-release

9. Install Docker

1. Switch to a mirror repository:

[root@master ~]# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo

2. List the Docker versions available from this repository:

[root@master ~]# yum list docker-ce --showduplicates

3. Install a specific docker-ce version.
--setopt=obsoletes=0 must be specified, otherwise yum automatically installs a newer version:

[root@master ~]# yum install --setopt=obsoletes=0 docker-ce-18.06.3.ce-3.el7 -y

4. Add a configuration file.
By default Docker uses cgroupfs as its cgroup driver, while Kubernetes recommends systemd instead:

[root@master ~]# mkdir /etc/docker
[root@master ~]# cat <<EOF> /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": ["https://kn0t2bca.mirror.aliyuncs.com"]
}
EOF

5. Start Docker:

[root@master ~]# systemctl restart docker
[root@master ~]# systemctl enable docker
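
To confirm Docker actually picked up the systemd cgroup driver from daemon.json, one quick check after the restart is:

[root@master ~]# docker info | grep -i "cgroup driver"    # expect: Cgroup Driver: systemd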

Installation issue

-bash: wget: command not found

This means wget is not installed; install it (along with its dependencies) with:

yum -y install wget

Check whether wget is installed:

rpm -qa | grep "wget"

10. Install the Kubernetes components

1. The Kubernetes packages are hosted abroad and download slowly, so switch to a domestic mirror.

2. Edit /etc/yum.repos.d/kubernetes.repo:

[root@master ~]# vim /etc/yum.repos.d/kubernetes.repo

Add the following configuration:

[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

The two steps above can also be collapsed into a single write:

[root@master ~]# cat <<EOF> /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Installation issue

Error from the domestic mirror:

https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/repodata/repomd.xml: [Errno -1] repomd.xml signature could not be verified for kubernetes
Trying other mirror.

The Aliyun mirror address changed from http://mirror.aliyun.com to https://mirrors.aliyun.com; make sure to test the address you use.

3. Install kubeadm, kubelet, and kubectl:

[root@master ~]# yum install --setopt=obsoletes=0 kubeadm-1.17.4-0 kubelet-1.17.4-0 kubectl-1.17.4-0 -y

Check the kubeadm, kubelet, and kubectl versions:

kubelet --version
kubeadm version
kubectl version

Installation issue

If you see:

e3438a5f740b3a907758799c3be2512a4b5c64dbe30352b2428788775c6b359e-kubectl-1.13.3-0.x86_64.rpm 的公钥尚未安装

失败的软件包是:kubectl-1.13.3-0.x86_64
GPG 密钥配置为:https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg

(the public key for the package has not been installed), skip the GPG check with the yum install xxx.rpm --nogpgcheck command format. For example, to skip the key check for kubelet, kubectl, and kubeadm:

Note the order: kubelet, kubectl, kubeadm.

yum install --setopt=obsoletes=0 kubelet-1.17.4-0 --nogpgcheck -y
yum install --setopt=obsoletes=0 kubectl-1.17.4-0 --nogpgcheck -y
yum install --setopt=obsoletes=0 kubeadm-1.17.4-0 --nogpgcheck -y

4. Configure the kubelet cgroup driver.
Edit /etc/sysconfig/kubelet:

[root@master ~]# vim /etc/sysconfig/kubelet

Add the following configuration:

KUBELET_CGROUP_ARGS="--cgroup-driver=systemd"
KUBE_PROXY_MODE="ipvs"

5. Enable kubelet at boot:

[root@master ~]# systemctl enable kubelet

11. Prepare the cluster images

Before installing the Kubernetes cluster, the images it needs must be downloaded in advance. The required images can be listed with:

[root@master ~]# kubeadm config images list

Download the images.
These images live in the Kubernetes registry, which is unreachable because of network restrictions, so the following workaround pulls them from a mirror instead:

[root@master ~]# images=(
kube-apiserver:v1.17.4
kube-controller-manager:v1.17.4
kube-scheduler:v1.17.4
kube-proxy:v1.17.4
pause:3.1
etcd:3.4.3-0
coredns:1.6.5
)

[root@master ~]# for imageName in ${images[@]};do
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
done
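
Once the loop finishes, the images should all be present locally under the k8s.gcr.io names that kubeadm expects; a quick check:

[root@master ~]# docker images | grep k8s.gcr.io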

Alternatively:
In general, images from k8s.gcr.io cannot be downloaded directly from within China. There are two options:
1. Pass an Aliyun mirror address to kubeadm init, which downloads without trouble; see the initialization step below.
2. Pre-download the images with a script such as pullk8s.sh (the script must be made executable):

touch pullk8s.sh

#!/bin/bash

# The image names below must have the "k8s.gcr.io/" prefix removed, and the versions
# replaced with those reported by `kubeadm config images list` (which returned v1.17.17
# here; v1.17.3 also works)

images=(
kube-apiserver:v1.17.3
kube-controller-manager:v1.17.3
kube-scheduler:v1.17.3
kube-proxy:v1.17.3
pause:3.1
etcd:3.4.3-0
coredns:1.6.5
)

for imageName in ${images[@]} ; do
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
done

Make it executable:

chmod +x pullk8s.sh

Run it:

bash pullk8s.sh  (or ./pullk8s.sh)

12. Initialize the cluster

The following steps only need to be run on the master node.

Create the cluster:

[root@master ~]# kubeadm init \
--apiserver-advertise-address=192.168.5.3 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version=v1.17.4 \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16

On success it prints:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.5.3:6443 --token ptga1f.5xzhj4vlb2ulyoyq \
--discovery-token-ca-cert-hash sha256:02e3b1b30ade0998bc6d70453d1a0c9143ca7e802fea9ef01e065973816bed56

# Create the required kubectl config
[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
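
If you are working as root, an alternative to copying the file is to point kubectl at the admin kubeconfig directly; note this only lasts for the current shell unless added to your profile:

[root@master ~]# export KUBECONFIG=/etc/kubernetes/admin.conf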

Installation issue

[root@localhost ~]# kubeadm init \
> --apiserver-advertise-address=192.168.5.3 \
> --image-repository registry.aliyuncs.com/google_containers \
> --kubernetes-version=v1.17.4 \
> --service-cidr=10.96.0.0/12 \
> --pod-network-cidr=10.244.0.0/16
W0614 18:07:59.199070 3729 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0614 18:07:59.199180 3729 validation.go:28] Cannot validate kubelet config - no validator is available
[init] Using Kubernetes version: v1.17.4
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR KubeletVersion]: the kubelet version is higher than the control plane version. This is not a supported version skew and may lead to a malfunctional cluster. Kubelet version: "1.24.1" Control plane version: "1.17.4"
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher

Cause: the kubelet and kubeadm versions do not match.

[root@localhost ~]# kubelet --version
Kubernetes v1.24.1
[root@localhost ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.4", GitCommit:"8d8aa39598534325ad77120c120a22b3a990b5ea", GitTreeState:"clean", BuildDate:"2020-03-12T21:01:11Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}

Fix: remove the mismatched packages and reinstall the 1.17.4 versions.

[root@localhost ~]# yum -y remove kubelet

[root@localhost ~]# yum install --setopt=obsoletes=0 kubelet-1.17.4-0 --nogpgcheck -y

[root@localhost ~]# systemctl enable kubelet && systemctl restart kubelet

[root@localhost ~]# yum install --setopt=obsoletes=0 kubeadm-1.17.4-0 --nogpgcheck -y

[root@localhost ~]# yum -y remove kubectl

[root@localhost ~]# yum install --setopt=obsoletes=0 kubectl-1.17.4-0 --nogpgcheck -y

Installation issue

error execution phase upload-config/kubelet: Error writing Crisocket information for the control-plane node: timed out waiting for the condition
To see the stack trace of this error execute with --v=5 or higher

Fix:

swapoff -a && kubeadm reset  && systemctl daemon-reload && systemctl restart kubelet  && iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X

Then run the init again:

[root@localhost ~]# kubeadm init \
> --apiserver-advertise-address=192.168.5.3 \
> --image-repository registry.aliyuncs.com/google_containers \
> --kubernetes-version=v1.17.4 \
> --service-cidr=10.96.0.0/12 \
> --pod-network-cidr=10.244.0.0/16

The following step only needs to be run on the node machines:

kubeadm join 192.168.5.3:6443 --token ptga1f.5xzhj4vlb2ulyoyq \
--discovery-token-ca-cert-hash sha256:02e3b1b30ade0998bc6d70453d1a0c9143ca7e802fea9ef01e065973816bed56

Installation issue

[root@localhost ~]# kubeadm join 192.168.5.3:6443 --token ptga1f.5xzhj4vlb2ulyoyq     --discovery-token-ca-cert-hash sha256:02e3b1b30ade0998bc6d70453d1a0c9143ca7e802fea9ef01e065973816bed56
W0617 14:28:32.053766 11654 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks

Only this warning appears and nothing else follows.
Fix:
This is most likely caused by a clock difference between the node and the master.
Synchronize the server time first, then run kubeadm join again.

[root@localhost ~]# yum install ntpdate -y
[root@localhost ~]# ntpdate ntp1.aliyun.com
17 Jun 14:34:03 ntpdate[5782]: step time server 120.25.115.20 offset 243455.286640 sec
[root@localhost ~]# date
2022年 06月 17日 星期五 14:34:07 CST

Installation issue

error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
[ERROR Port-10250]: Port 10250 is in use
[ERROR FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher

Cause: this node already joined once; it has to be reset before joining again.
Run kubeadm reset on the node:

[root@localhost ~]# kubeadm reset

Installation issue

[root@node2 ~]# kubeadm join 192.168.5.3:6443 --token ptga1f.5xzhj4vlb2ulyoyq     --discovery-token-ca-cert-hash sha256:02e3b1b30ade0998bc6d70453d1a0c9143ca7e802fea9ef01e065973816bed56
W0617 15:16:33.251015 4207 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
error execution phase preflight: unable to fetch the kubeadm-config ConfigMap: failed to get config map: Unauthorized
To see the stack trace of this error execute with --v=5 or higher

Fix: the token has expired (tokens are only valid for 24 hours), so generate a new one.

Run on the master node:

[root@localhost ~]# kubeadm token list
TOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS
ptga1f.5xzhj4vlb2ulyoyq <invalid> 2022-06-15T18:33:11+08:00 authentication,signing The default bootstrap token generated by 'kubeadm init'. system:bootstrappers:kubeadm:default-node-token
[root@localhost ~]# kubeadm token create
W0617 15:21:36.403405 5995 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0617 15:21:36.403480 5995 validation.go:28] Cannot validate kubelet config - no validator is available
0mtxqm.ucoojki6c4od2ojb
[root@localhost ~]# kubeadm token list
TOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS
0mtxqm.ucoojki6c4od2ojb 23h 2022-06-18T15:21:36+08:00 authentication,signing <none> system:bootstrappers:kubeadm:default-node-token
ptga1f.5xzhj4vlb2ulyoyq <invalid> 2022-06-15T18:33:11+08:00 authentication,signing The default bootstrap token generated by 'kubeadm init'. system:bootstrappers:kubeadm:default-node-token

Then join with the new token:

kubeadm join 192.168.5.3:6443 --token 0mtxqm.ucoojki6c4od2ojb     --discovery-token-ca-cert-hash sha256:02e3b1b30ade0998bc6d70453d1a0c9143ca7e802fea9ef01e065973816bed56
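
Rather than splicing the new token into the old command by hand, kubeadm can also print a complete, ready-to-use join command (the same command shown in step 16 below), and the --discovery-token-ca-cert-hash value can be recomputed from the CA certificate on the master if it is ever lost:

# print a full join command with a fresh token
[root@master ~]# kubeadm token create --print-join-command

# recompute the CA cert hash
[root@master ~]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'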

Check the node status on the master:

[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master NotReady master 6m v1.17.4
node1 NotReady <none> 22s v1.17.4
node2 NotReady <none> 19s v1.17.4

13. Install the network plugin (master node only)

[root@master ~]# wget https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml

(Use the raw file URL as above; the github.com .../tree/master/... page address returns an HTML page rather than the manifest.)

Since this site can be hard to reach, if the download fails you can use the manifest below directly. Name the file kube-flannel.yml, place it at /root/kube-flannel.yml, and give it the following content:


---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.13.1-rc1
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.13.1-rc1
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg

Start flannel from the manifest:

kubectl apply -f kube-flannel.yml

Wait for it to finish rolling out; the cluster nodes should then report Ready.
Note that this step can take a while.
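
To watch the flannel DaemonSet come up and confirm the nodes flip to Ready, something like:

[root@master ~]# kubectl get pods -n kube-system | grep flannel
[root@master ~]# kubectl get nodes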

Installation issue

The following output may appear here; it is harmless:

[root@master ~]# kubectl apply -f kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged configured
clusterrole.rbac.authorization.k8s.io/flannel unchanged
clusterrolebinding.rbac.authorization.k8s.io/flannel unchanged
serviceaccount/flannel unchanged
configmap/kube-flannel-cfg unchanged
daemonset.apps/kube-flannel-ds unchanged

Installation issue

If localhost.localdomain stays NotReady:

[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
localhost.localdomain NotReady master 2d22h v1.17.4
node1 Ready <none> 128m v1.17.4
node2 Ready <none> 127m v1.17.4

Fix: change the hostname.

Edit /etc/hostname: delete the old hostname, type the new one, save and quit with :wq, then reboot for the change to take effect.

[root@master ~]# vi /etc/hostname
[root@master ~]# reboot
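
On CentOS 7 the same change can also be made without editing the file by hand; hostnamectl rewrites /etc/hostname for you (the name used here is just an example):

[root@master ~]# hostnamectl set-hostname master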

Then reset the master and the worker nodes:

kubeadm reset

and run the cluster initialization again.

14. Reset the cluster with kubeadm reset

Run on the nodes other than the master:

[root@node01 ~]# kubeadm reset
[root@node01 ~]# systemctl stop kubelet
[root@node01 ~]# systemctl stop docker
[root@node01 ~]# rm -rf /var/lib/cni/
[root@node01 ~]# rm -rf /var/lib/kubelet/*
[root@node01 ~]# rm -rf /etc/cni/
[root@node01 ~]# ifconfig cni0 down
[root@node01 ~]# ifconfig flannel.1 down
[root@node01 ~]# ifconfig docker0 down
[root@node01 ~]# ip link delete cni0
[root@node01 ~]# ip link delete flannel.1

Restart kubelet:

[root@node01 ~]# systemctl restart kubelet

Restart Docker:

[root@node01 ~]# systemctl restart docker

15. Restart kubelet and Docker

# restart kubelet
systemctl restart kubelet
# restart docker
systemctl restart docker

Start flannel from the manifest again:

[root@node01 ~]# kubectl apply -f kube-flannel.yml

Wait for it to finish; the cluster should then report Ready.

16. Useful kubeadm command

# generate a new token (prints the full join command)
[root@master ~]# kubeadm token create --print-join-command

VII. Testing the cluster

1. Create an nginx deployment:

[root@master ~]# kubectl create deployment nginx  --image=nginx:1.14-alpine

2. Expose the port:

[root@master ~]# kubectl expose deploy nginx  --port=80 --target-port=80  --type=NodePort

3. Check the service:

[root@master ~]# kubectl get pod,svc

4. Check the pods:

[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready master 6m55s v1.17.4
node1 Ready <none> 4m34s v1.17.4
node2 Ready <none> 4m32s v1.17.4

[root@master ~]# kubectl get pod,svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 115m

[root@master ~]# kubectl create deployment nginx  --image=nginx:1.14-alpine
deployment.apps/nginx created

[root@master ~]# kubectl expose deploy nginx  --port=80 --target-port=80  --type=NodePort
service/nginx exposed

[root@master ~]# kubectl get pod,svc
NAME READY STATUS RESTARTS AGE
pod/nginx-6867cdf567-4g2mm 1/1 Running 0 16s

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 117m
service/nginx NodePort 10.101.154.226 <none> 80:31818/TCP 4s

[root@master ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-6867cdf567-4g2mm 1/1 Running 0 89s

Test in a browser by visiting any of:

http://192.168.5.3:31818/
http://192.168.5.4:31818/
http://192.168.5.5:31818/
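
The same check can be done from the command line on any machine that can reach the nodes; the NodePort (31818 here) comes from the kubectl get svc output above and will differ per cluster:

[root@master ~]# curl -I http://192.168.5.3:31818/    # an HTTP/1.1 200 OK header means nginx is reachable through the NodePort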
